Update provider ecosystem and enhance functionality #2246
Conversation
Hi! Thanks for your work! Tested on 24.09 (14:40 UTC+3).
Also, you list CodeNews under "Removed", but below you post updates for CodeNews. Is that a typo?
…cs/providers-and-models.md'
Okay, now I'll list some problems I ran into (or maybe they're only on my end).
My code for the test:
Thanks!
For bypassing Cloudflare protection, take a look at this: https://github.com/FlareSolverr/FlareSolverr
@kqlio67 I saw that you deleted all providers in the "deprecated" folder. It would be better to keep them for future reference and in case they get un-patched. Other than that, very good pull; I will merge soon once you add them back. I am also thinking about implementing a Cloudflare-solving mechanism that will be available to all providers if needed.
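For context, FlareSolverr runs as a local service that drives a real browser and exposes a small JSON API. A minimal stdlib-only sketch of calling it, assuming a default instance on localhost:8191 (the `/v1` endpoint and the `request.get` command come from FlareSolverr's documented API; everything else here is illustrative):

```python
import json
import urllib.request

FLARESOLVERR_URL = "http://localhost:8191/v1"  # assumes a locally running FlareSolverr instance

def build_payload(url: str, max_timeout_ms: int = 60000) -> dict:
    # FlareSolverr's v1 API takes a JSON body with a "cmd" field;
    # "request.get" fetches the target page through a real browser session.
    return {"cmd": "request.get", "url": url, "maxTimeout": max_timeout_ms}

def solve(url: str) -> dict:
    # POST the payload; the response includes solution.response (the HTML)
    # and the cookies needed to reuse the cleared session.
    req = urllib.request.Request(
        FLARESOLVERR_URL,
        data=json.dumps(build_payload(url)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(build_payload("https://example.com"))
```

A provider would then reuse the returned cookies and user agent on its normal HTTP requests instead of hitting the Cloudflare challenge directly.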
Hello, @TheFirstNoob!

Response to point 1. (Return CodeNews)

Response to point 2. (AsyncClient (g4f func) for images)

```python
import asyncio
from g4f.client import Client

async def main():
    client = Client()
    images = await client.async_images()
    response = await images.async_generate(
        model="flux",
        prompt="a white siamese cat",
    )
    image_url = response.data[0].url
    print(f"Generated image URL: {image_url}")

if __name__ == "__main__":
    asyncio.run(main())
```

Code execution result:
Judging from the result, AsyncClient successfully generates an image URL based on the provided prompt "a white siamese cat" using the "flux" model. If you don't encounter any errors when using AsyncClient for image generation and you get the expected result, then the problem you reported earlier may be specific to a certain usage scenario or depend on other factors. If you still encounter problems when using AsyncClient to generate images in other parts of your code, please provide more details about the specific scenario where the error occurs so that I can better understand the issue and try to help you.

Response to point 3. (AsyncClient (g4f func) for vision)

Although I haven't tested it with Replicate, it should work:

```python
import requests
import asyncio
import base64
from g4f.client import AsyncClient
from g4f.Provider import DeepInfraChat, Blackbox, Replicate

import g4f.debug
g4f.debug.logging = True
g4f.debug.verbose = True
g4f.debug.version_check = False

def to_data_uri(image_data):
    base64_image = base64.b64encode(image_data).decode('utf-8')
    return f"data:image/jpeg;base64,{base64_image}"

async def analyze_image(image_url, provider, model=None, api_key=None):
    client = AsyncClient(provider=provider, api_key=api_key)
    image_data = requests.get(image_url).content
    image_base64 = to_data_uri(image_data)

    if provider == Blackbox:
        messages = [
            {
                "role": "user",
                "content": "What is on this image? Describe it in detail.",
                "data": {
                    "imageBase64": image_base64,
                    "fileText": "image.jpg"
                }
            }
        ]
    elif provider == Replicate:
        image = requests.get(image_url, stream=True).raw
        messages = [{"role": "user", "content": "What is on this image? Describe it in detail."}]
    else:
        messages = [
            {
                "role": "user",
                "content": [
                    {
                        "type": "image_url",
                        "image_url": {"url": image_base64}
                    },
                    {
                        "type": "text",
                        "text": "What is on this image? Describe it in detail."
                    }
                ]
            }
        ]

    try:
        if provider == Replicate:
            response = await client.chat.completions.create(
                messages=messages,
                model=model,
                image=image
            )
        else:
            response = await client.chat.completions.create(
                messages=messages,
                model=model
            )
        if isinstance(response, str):
            return response
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

async def main():
    image_url = "https://kartinki.pics/pics/uploads/posts/2022-09/thumbs/1663711643_52-kartinkin-net-p-yaponskaya-yenotovidnaya-sobaka-tanuki-pin-54.jpg"
    providers = [
        (DeepInfraChat, "openbmb/MiniCPM-Llama3-V-2_5", None),
        (Blackbox, "blackbox", None),
        (Replicate, "yorickvp/llava-13b", "your_replicate_api_key_here")
    ]
    for provider, model, api_key in providers:
        print(f"\nTrying with {provider.__name__} and model {model}")
        result = await analyze_image(image_url, provider, model, api_key)
        print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

Here are the results:
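The `to_data_uri` helper is the one piece of the script above that's easy to verify in isolation. A stdlib-only sketch (the `mime` parameter is an addition for illustration; the original helper hard-codes JPEG):

```python
import base64

def to_data_uri(image_data: bytes, mime: str = "image/jpeg") -> str:
    # Encode raw image bytes as a base64 data URI so the image can be
    # embedded inline in a chat message instead of being passed as a URL.
    encoded = base64.b64encode(image_data).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

print(to_data_uri(b"abc"))  # data:image/jpeg;base64,YWJj
```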
Key modifications: code structure, image processing, message formatting, API calls, error handling, configuration, and flexibility.

Try running this modified code and see if you can successfully pass the image to the Replicate provider and receive a response. If you encounter any issues, please report the error or provide additional information so I can assist you further.

Response to point 4. (Prodia provider)
@Felitendo Thanks for the useful advice! I'll definitely take a look at the FlareSolverr project for bypassing Cloudflare security. It can be very useful to improve the functionality of the project. I appreciate your input and the time you took to share this resource.
@xtekky Thank you for your feedback! I understand your point about keeping the deprecated providers and agree that it could be useful for future reference. I have already restored the deprecated providers. Regarding the Cloudflare solving mechanism - that sounds like a great idea that could significantly improve functionality for all providers. While I'm still in the learning process, I've been actively contributing useful fixes that have been accepted, and I'm quickly picking up new skills. I'm excited to see how this develops and potentially contribute where I can. Please let me know if any additional changes or clarifications are needed for my pull request. I'm eager to continue learning and improving my contributions to the project. Thank you for considering my changes and for providing these opportunities to grow and contribute!
@kqlio67 Hello! Thanks for the reply!
Result:
HuggingChat now uses an updated list of new models:
Old models (remove?):
UPDATED: Ovis1.6: https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Gemma2-9B NEW GEMINI PROVIDER (to reverse)
No problem, thanks for fixing #2228 btw :)
@TheFirstNoob Thank you for bringing this to my attention! I've just tried running your code and it seems to be working now. Here's what I got:
It looks like the issue has been resolved. The image generation is now functioning correctly without the error you encountered earlier. Regarding the documentation, it seems that after recent updates and contributions, it might not be fully up-to-date. I'm sure the project maintainers or other contributors will update it soon to reflect the current functionality. Thanks again for your input. It's great to see the community helping each other out!
@kqlio67 Yooooo, that's a big update! Thank you very much for your work! I'll fully test the code after the merge.
…py g4f/Provider/DeepInfraChat.py
Please remove the AIChatFree provider and add https://gprochat.com/. It's the original version and it's also more up to date (look at the copyright date at the bottom).
@Felitendo, thanks for the suggestion! I've gone ahead and added the new GPROChat provider from https://gprochat.com/, as you recommended. However, I decided to keep the AIChatFree provider for now, since it's still working. I don't see a reason to remove a provider if it's functioning properly. This way, users will have more options to choose from. Of course, if any issues come up with AIChatFree in the future, I'll consider removing it then. But for now, I think it's best to keep both providers available, as they're both working and giving users additional choices.
Alright, that's fine. I just thought that we have too many providers already, so maybe you could move it to the deprecated folder to restore later if needed. But if you don't, that's fine too :)
Hi @TheFirstNoob,

You asked how to use the Prodia provider for generating images. Below is an example of how you can obtain the URL of an image generated by the model using an asynchronous client:

```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Prodia

async def main():
    # Create an AsyncClient with the specified image provider
    client = AsyncClient(image_provider=Prodia)

    # Generate an image based on the prompt
    response = await client.images.generate(
        model="absolutereality_v181.safetensors [3d9d4d2b]",
        prompt="a white siamese cat"
    )

    # Check if there are any images in the response
    if response.data:
        # Loop through and print the URL of each generated image
        for img in response.data:
            print(img.url)
    else:
        print("No images found.")

# Run the main function
asyncio.run(main())
```

Result: https://images.prodia.xyz/b31c5be8-7850-4b5b-b4ba-e8eef3aed957.png

This should help you get started. Even if you're already familiar, it might still be useful for you or others who come across it.
@kqlio67 Hi! Thank you very much for providing this code and for your work on this :)
New providers added
- g4f/Provider/DeepInfraChat.py — llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, mixtral-8x22b, mixtral-8x7b, wizardlm-2-8x22b, wizardlm-2-7b, qwen-2-72b, phi-3-medium-4k, gemma-2b-27b, minicpm-llama-3-v2.5, mistral-7b, lzlv_70b, openchat-3.6-8b, phind-codellama-34b-v2, dolphin-2.9.1-llama-3-70b
  - minicpm-llama-3-v2.5
- g4f/Provider/AIChatFree.py (#2246 (comment)) — gemini-pro
- g4f/Provider/ChatHub.py — llama-3.1-8b, mixtral-8x7b, gemma-2, sonar-online
- g4f/Provider/GPROChat.py (#2246 (comment)) — gemini-pro
Removed providers
- g4f/Provider/GptTalkRu.py
- g4f/Provider/Snova.py (remove snova as provider as they changed to api key model. #2253)
- g4f/Provider/TwitterBio.py
- g4f/Provider/Vercel.py
- g4f/Provider/CodeNews.py
- g4f/Provider/unfinished/
Fixing the G4F enhancement
- g4f/client/async_client.py
- etc/unittest/async_client.py — refactor async tests for chat completions
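As a rough illustration of what a refactored async test for chat completions can look like, here is a stdlib-only sketch using `unittest.IsolatedAsyncioTestCase`; the client class is a hypothetical stand-in for illustration, not g4f's actual `AsyncClient`:

```python
import unittest
from unittest.mock import AsyncMock

class StubAsyncClient:
    """Hypothetical stand-in for an async chat client, for illustration only."""
    def __init__(self, responder):
        self._responder = responder

    async def create(self, messages, model):
        # Delegate to an injected async responder so tests can mock it.
        return await self._responder(messages, model)

class TestChatCompletions(unittest.IsolatedAsyncioTestCase):
    async def test_create_returns_mocked_text(self):
        responder = AsyncMock(return_value="Hello!")
        client = StubAsyncClient(responder)
        result = await client.create(
            [{"role": "user", "content": "Hi"}], model="gpt-4o-mini"
        )
        self.assertEqual(result, "Hello!")
        responder.assert_awaited_once()
```

`IsolatedAsyncioTestCase` gives each test its own event loop, so `await` can be used directly in test methods without manual `asyncio.run` plumbing.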
Provider fixes and improvements
- g4f/Provider/AI365VIP.py — update the provider to support new models and aliases
- g4f/Provider/Airforce.py (Advertisement in answers #2233) — fix the generate_text method to handle message history correctly; further updates to the generate_text and generate_image methods
- g4f/Provider/Bixin123.py — add gpt-3.5-turbo to the list of supported models
- g4f/Provider/Blackbox.py — add webSearchMode with a default value of False to the data object sent in the API request; gpt-4o, claude-3.5-sonnet, gemini-pro ([Request] More models for Blackbox #2238)
- g4f/Provider/Chatgpt4o.py — gpt-4o-mini-2024-07-18 as the only available model
- g4f/Provider/DDG.py — refactor create_async_generator for better readability and efficiency; Conversation class and use dict for conversation state; get_model method
- g4f/Provider/Liaobots.py — gpt-3.5-turbo
- g4f/Provider/LiteIcoding.py
- g4f/Provider/MagickPen.py
- g4f/Provider/ChatGpt.py — create_completion
- g4f/Provider/ChatGptEs.py — AsyncGeneratorProvider for async support; get_model method
- g4f/Provider/Upstage.py — change default model from upstage/solar-1-mini-chat to solar-pro and include solar-pro in the models list
- g4f/Provider/HuggingChat.py (#2246 (comment)) — Qwen/Qwen2.5-72B-Instruct (Qwen2.5-72B model); Hermes-3-Llama-3.1-8B and Mistral-Nemo-Instruct-2407; Phi-3.5-mini-instruct
- g4f/Provider/Nexra.py — NexraBing, NexraChatGPT, NexraChatGPT4o, NexraChatGPTWeb, NexraGeminiPro, NexraImageURL, NexraLlama, and NexraQwen; sdxl-turbo as the default image model

Problems with providers
g4f/Provider/AI365VIP.py
g4f/Provider/AiChatOnline.py
g4f/Provider/AiChats.py
g4f/Provider/Bing.py
g4f/Provider/Chatgpt4o.py
g4f/Provider/Chatgpt4Online.py
g4f/Provider/ChatgptFree.py
g4f/Provider/FreeNetfly.py
g4f/Provider/Koala.py
g4f/Provider/PerplexityLabs.py